Published May 31, 2023
By Kimberly Mann Bruch
The MLCommons Science Working Group encourages and supports the curation of large-scale experimental and scientific datasets and the engineering of machine learning (ML) benchmarks operating on those datasets. Tirelessly working to advance ML and artificial intelligence (AI) within the research community, the Working Group has recently released four benchmarks focused on cloud masking, geophysics, scanning transmission electron microscopy (STEMDL) and the cancer distributed learning environment (CANDLE UNO).
“We are thrilled to invite researchers to run these benchmarks and report back to us on the systems that they use and how long the benchmarks take to run,” said Christine Kirkpatrick, the director of Research Data Services at the San Diego Supercomputer Center at UC San Diego and an active participant in MLCommons. “For instance, we are really interested in better understanding how ML can be made more efficient based on the choices made with hardware and software.”
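For readers who want a sense of what such a report might contain, the short Python sketch below times a benchmark command and records basic details about the host system in a JSON file. It is only an illustration under stated assumptions, not the Working Group's official submission process: the benchmark entry point (run_cloudmask.py), its config flag, and the report filename are all hypothetical placeholders, to be replaced with the actual tooling from the MLCommons Science repository.

    import json
    import platform
    import subprocess
    import time

    # Hypothetical entry point for one of the science benchmarks; substitute
    # the real command from the MLCommons Science Working Group repository.
    BENCHMARK_CMD = ["python", "run_cloudmask.py", "--config", "config.yaml"]

    def run_and_report(cmd, report_path="benchmark_report.json"):
        """Run a benchmark command, then record the host system and wall-clock time."""
        start = time.perf_counter()
        result = subprocess.run(cmd, check=False)
        elapsed = time.perf_counter() - start

        report = {
            "command": " ".join(cmd),
            "return_code": result.returncode,
            "wall_clock_seconds": round(elapsed, 2),
            "system": {
                "machine": platform.machine(),
                "processor": platform.processor(),
                "platform": platform.platform(),
                "python_version": platform.python_version(),
            },
        }
        # Write the report so it can be shared alongside the benchmark results.
        with open(report_path, "w") as f:
            json.dump(report, f, indent=2)
        return report

    if __name__ == "__main__":
        print(run_and_report(BENCHMARK_CMD))

A real submission would also capture accelerator details (GPU model, driver version) and the software stack, since those are precisely the hardware and software choices the group hopes to correlate with efficiency.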
Kirkpatrick leads several initiatives focused on findable, accessible, interoperable and reusable (FAIR) data and other research data management concepts. In this role, she explores where data management intersects with ML. Her most recent efforts involve the FAIR in ML, AI Readiness, AI Reproducibility (FARR) project, which has been funded by the National Science Foundation to promote better practices for AI, improve efficiency and reproducibility, and explore research gaps and priorities for data-centric AI.
“The MLCommons benchmarks provide a wealth of computer science data to mine with the ultimate goal of finding new efficiencies,” Kirkpatrick said. “This aligns perfectly with the efforts of the FARR project, and we are eager to learn how applying information science techniques to these benchmarks makes the data more (re)usable.”
Geoffrey Fox, a computer science professor at the University of Virginia and a co-founder of the Working Group, noted, “These benchmarks represent useful tutorials on methods that can be used across many disciplines in AI for Science.”
To access the MLCommons Science Working Group benchmarks, or to participate or learn more about the group and its work, visit the MLCommons website.